Transduction (machine learning) : Wikipedia (English edition)
Transduction (machine learning)

In logic, statistical inference, and supervised learning,
transduction or transductive inference is reasoning from
observed, specific (training) cases to specific (test) cases. In contrast,
induction is reasoning from observed training cases
to general rules, which are then applied to the test cases. The distinction is
most interesting in cases where the predictions of the transductive model are
not achievable by any inductive model. Note, however, that transductive
inference on different test sets may produce mutually inconsistent predictions.
Transduction was introduced by Vladimir Vapnik in the 1990s, motivated by
his view that transduction is preferable to induction because induction requires
solving a more general problem (inferring a function) before solving a more
specific one (computing outputs for new cases): "When solving a problem of
interest, do not solve a more general problem as an intermediate step. Try to
get the answer that you really need but not a more general one." A similar
observation had been made earlier by Bertrand Russell:
"we shall reach the conclusion that Socrates is mortal with a greater approach to
certainty if we make our argument purely inductive than if we go by way of 'all men are mortal' and then use
deduction" (Russell 1912, chap. VII).
An example of learning that is not inductive is binary classification in which
the inputs tend to cluster in two groups. A large set of test inputs may help
in finding the clusters, thus providing useful information about the
classification labels. The same predictions would not be obtainable from a
model that induces a function based only on the training cases. Some people
would call this an example of the closely related semi-supervised learning, although Vapnik's motivation is quite different. An example of an algorithm in this category is the Transductive Support Vector Machine (TSVM).
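The clustering idea above can be sketched in a few lines. The following is a minimal, illustrative example (it is not Vapnik's TSVM; the function names, the 1-D data, and the gap threshold are all assumptions made here): the unlabeled test inputs reveal two clusters that the labeled points alone would miss, and each unlabeled point inherits the label of the labeled point sharing its cluster.

```python
def cluster_1d(points, gap=1.0):
    """Group sorted 1-D points into clusters, splitting at gaps larger than `gap`."""
    pts = sorted(points)
    clusters = [[pts[0]]]
    for p in pts[1:]:
        if p - clusters[-1][-1] > gap:
            clusters.append([])
        clusters[-1].append(p)
    return clusters

def transductive_labels(labeled, unlabeled, gap=1.0):
    """Label every point with the label of the labeled point sharing its cluster."""
    clusters = cluster_1d([x for x, _ in labeled] + unlabeled, gap)
    out = {}
    for cluster in clusters:
        # take the label of a labeled point that fell into this cluster
        tag = next(lbl for x, lbl in labeled if x in cluster)
        for p in cluster:
            out[p] = tag
    return out

# Two well-separated clusters; only one labeled point in each.
labeled = [(0.0, "neg"), (10.0, "pos")]
unlabeled = [0.5, 1.0, 4.0, 4.8, 5.6, 6.4, 7.2, 8.0, 8.8, 9.6]
labels = transductive_labels(labeled, unlabeled)
```

Note the point at 4.0: it is nearer to the labeled "neg" point than to the labeled "pos" point, so an inductive 1-nearest-neighbor model trained on the labeled points alone would call it "neg"; the transductive procedure labels it "pos" because the unlabeled points chain it into the right-hand cluster.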
A third possible motivation for transduction arises from the need to
approximate. If exact inference is computationally prohibitive, one may at
least try to make sure that the approximations are good at the test inputs. In
this case, the test inputs could come from an arbitrary distribution (not
necessarily related to the distribution of the training inputs), which wouldn't
be allowed in semi-supervised learning. An example of an algorithm falling in
this category is the Bayesian Committee Machine (BCM).
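As a sketch of this idea, the BCM's combination rule (Tresp, 2000) merges the Gaussian predictions of M committee members at a test input by summing their precisions and subtracting the prior precision counted M−1 times too often. This is a hedged illustration, not a full BCM implementation; the function name and the zero-mean-prior assumption are choices made here.

```python
def bcm_combine(means, variances, prior_var):
    """Merge M Gaussian predictions (mu_i, var_i) at one test point, BCM-style,
    assuming a zero-mean prior with variance `prior_var`."""
    m = len(means)
    # combined precision: sum of member precisions minus (M-1) prior precisions
    precision = sum(1.0 / v for v in variances) - (m - 1) / prior_var
    var = 1.0 / precision
    # with a zero-mean prior, the combined mean is the precision-weighted sum
    mean = var * sum(mu / v for mu, v in zip(means, variances))
    return mean, var

# Two members that agree exactly; prior variance 1.0.
mean, var = bcm_combine([2.0, 2.0], [0.5, 0.5], prior_var=1.0)
```

Because each member's prediction was shrunk toward the prior, removing the duplicated prior pushes the combined mean slightly away from the prior mean, and the combined variance is smaller than either member's.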
==Example Problem==

The following example problem contrasts some of the unique properties of transduction against induction.
[Figure (labels.png): a collection of points, a few labeled A, B, or C, most unlabeled]
A collection of points is given, such that some of the points are labeled (A, B, or C), but most of the points are unlabeled (?). The goal is to predict appropriate labels for all of the unlabeled points.
The inductive approach to solving this problem is to use the labeled points to train a supervised learning algorithm, and then have it predict labels for all of the unlabeled points. With this problem, however, the supervised learning algorithm will only have five labeled points to use as a basis for building a predictive model. It will certainly struggle to build a model that captures the structure of this data. For example, if a nearest-neighbor algorithm is used, then the points near the middle will be labeled "A" or "C", even though it is apparent that they belong to the same cluster as the point labeled "B".
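The failure mode just described can be made concrete. The toy 2-D points below are invented for illustration (they are not the figure's data): an inductive 1-nearest-neighbor model sees only the labeled points, so a query midway between the "A" and "C" points receives one of their labels even if, given the unlabeled cloud, it sits in the "B" cluster.

```python
def nn_label(point, labeled):
    """Inductive 1-NN: label a query point by its nearest *labeled* point."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(labeled, key=lambda item: dist2(point, item[0]))[1]

labeled = [((0.0, 0.0), "A"), ((10.0, 0.0), "C"), ((5.0, 8.0), "B")]
# A query in the middle, which the unlabeled points might place in B's cluster:
query = (5.0, 2.0)
label = nn_label(query, labeled)
```

Here the query is equidistant from the "A" and "C" points and farther from the "B" point, so 1-NN assigns "A" (or "C"); cluster structure in the unlabeled data never enters the decision.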
Transduction has the advantage of being able to consider all of the points, not just the labeled points, while performing the labeling task. In this case, transductive algorithms would label the unlabeled points according to the clusters to which they naturally belong. The points in the middle, therefore, would most likely be labeled "B", because they are packed very close to that cluster.
An advantage of transduction is that it may be able to make better predictions with fewer labeled points, because it uses the natural breaks found in the unlabeled points. One disadvantage of transduction is that it builds no predictive model. If a previously unknown point is added to the set, the entire transductive algorithm would need to be repeated with all of the points in order to predict a label. This can be computationally expensive if the data is made available incrementally in a stream. Further, this might cause the predictions of some of the old points to change (which may be good or bad, depending on the application). A supervised learning algorithm, on the other hand, can label new points instantly, with very little computational cost.

Excerpt source: Wikipedia, the free encyclopedia, article "Transduction (machine learning)".




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.